
    CLASSIFICATION OF BREAST CANCER HISTOPATHOLOGY IMAGES USING A CONVOLUTIONAL NEURAL NETWORK MODEL

    Among the different cancers that exist, breast cancer has been identified to account for about 2.26 million new cases among women globally in 2020, according to the WHO. Early diagnosis of breast cancer can help reduce the mortality rate. Due to the large volume of breast cancer cases and the limited availability of histopathologists and clinicians, diagnoses by the available specialists can be subjective, which can lead to misjudgement. An intelligent system that can assist the limited pool of histopathologists is therefore crucial for optimal diagnosis. Prior work has typically drawn its dataset from a single source and required a custom-trained CNN. This dissertation therefore aims to employ Convolutional Neural Network models for accurate classification of breast cancer histopathology images curated from different dataset sources. This work utilised two datasets at different magnifications, the BreakHis and the Breast Histopathology dataset. A hybrid dataset was created from these two datasets and split 70%/30% into training and testing sets. Four pre-trained Convolutional Neural Network (CNN) models (DenseNet201, ResNet50, ResNet101 and MobileNet-v2) were used for the analysis after preprocessing and rescaling. The findings show that DenseNet201 achieved the highest classification accuracy of 88.17%, 87.73%, 92.2% and 91.4% for the BreakHis dataset at 40X, 100X, 200X and 400X magnification factors respectively; 83.67% for the Breast Histopathology dataset at 200X; and 85.78% for the hybrid dataset at 200X. The models were able to classify the images as benign or malignant, with DenseNet201 giving the best performance in terms of specificity and sensitivity at 100%. The implication is that the DenseNet201 model can be used to accurately differentiate between benign and malignant breast histopathology images, thus serving as a decision support system in the early diagnosis of breast cancer.
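The specificity and sensitivity figures quoted above can be computed as follows. This is a minimal illustrative sketch, not code from the dissertation: the function names and the label convention (1 = malignant as the positive class, 0 = benign) are assumptions for the example.

```python
# Illustrative sketch: sensitivity and specificity for a binary
# benign/malignant classifier. Convention assumed here: 1 = malignant
# (positive class), 0 = benign (negative class).

def confusion_counts(y_true, y_pred):
    """Return (TP, TN, FP, FN) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def sensitivity(y_true, y_pred):
    # True-positive rate: fraction of malignant cases correctly flagged.
    tp, _, _, fn = confusion_counts(y_true, y_pred)
    return tp / (tp + fn)

def specificity(y_true, y_pred):
    # True-negative rate: fraction of benign cases correctly cleared.
    _, tn, fp, _ = confusion_counts(y_true, y_pred)
    return tn / (tn + fp)

# A classifier that labels every test image correctly scores 100% on
# both metrics, as reported for DenseNet201 above.
y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0]
print(sensitivity(y_true, y_pred), specificity(y_true, y_pred))  # 1.0 1.0
```

Note that accuracy alone can hide a poor trade-off between the two error types; reporting sensitivity and specificity separately, as the dissertation does, is the standard practice for diagnostic models.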

    Designing an Adaptive Age-Invariant Face Recognition System for Enhanced Security in Smart Urban Environments

    The advent of smart technology in urban environments has often been hailed as the solution to a plethora of contemporary urban challenges, ranging from environmental conservation to waste management and transportation. However, the critical aspect of security, encompassing crime detection and prevention, is frequently overlooked. Moreover, there is a dearth of research exploring the potential disruption of conventional face detection and recognition systems by new smart city surveillance cameras, particularly those that autonomously update their databases. This paper addresses this gap by proposing the enhancement of security in smart cities through the development of an adaptive Age-Invariant Face Recognition (AIFR) model. A non-intrusive AIFR model was constructed using a convolutional neural network and transfer learning techniques, and was then integrated into surveillance cameras. These cameras, designed to capture the faces of city residents at regular intervals, consequently updated their databases autonomously. Upon testing, the developed model demonstrated its potential to substantially improve security by effectively detecting and identifying the residents and visitors of smart cities, and updating their database profiles. Remarkably, the model retained its effectiveness even under significant intra-class age variation, with the capability to alert relevant authorities about potential criminals or missing individuals. This research underscores the potential of adaptive face recognition systems in bolstering security measures within smart urban environments.
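The matching-and-update loop described above can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: in practice the embeddings would come from the CNN described in the abstract, and the names `identify` and `enroll_or_refresh`, the cosine-similarity metric, and the 0.8 threshold are all assumptions for the example.

```python
# Illustrative sketch: matching a probe face embedding against a gallery
# that the system refreshes autonomously, so stored profiles track
# age-related drift in appearance. Embeddings are plain lists of floats
# standing in for CNN feature vectors.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(probe, gallery, threshold=0.8):
    """Return the best-matching identity, or None if no gallery entry
    reaches the similarity threshold (an unknown face)."""
    best_id, best_sim = None, threshold
    for identity, emb in gallery.items():
        sim = cosine_similarity(probe, emb)
        if sim >= best_sim:
            best_id, best_sim = identity, sim
    return best_id

def enroll_or_refresh(probe, gallery, identity):
    """Autonomous database update: overwrite the stored embedding with
    the latest capture so the profile stays current as the person ages."""
    gallery[identity] = probe
```

Periodically replacing the stored embedding is one simple way to keep recognition stable across ageing; richer schemes (keeping several embeddings per identity, or averaging) are also possible.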